Conversation

@Dwij1704 (Member) commented Apr 11, 2025

📥 Pull Request

📘 Description
Closes #912
Replaced the previous implementation entirely.

  • Complete API Coverage:
    • Instruments messages.create (modern API) and completions.create (legacy API)
    • Supports both synchronous and asynchronous client interfaces
    • Special handling for streaming responses with both sync and async patterns
    • Tool usage capture for Claude's function calling capabilities
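The wrapping approach described above can be sketched generically. The following is an illustrative monkey-patch pattern only; the helper and recorder names are hypothetical and are not the actual AgentOps internals. Each call to the wrapped method records a telemetry event with the method name, model, and latency, then returns the original result untouched.

```python
import time
import functools

def instrument_method(obj, method_name, record):
    """Wrap obj.method_name so each call emits one telemetry event (illustrative)."""
    original = getattr(obj, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = original(*args, **kwargs)
        record({
            "method": method_name,
            "model": kwargs.get("model"),
            "latency_s": time.time() - start,
        })
        return result

    setattr(obj, method_name, wrapper)

# Stand-in for client.messages, so the sketch runs without the real SDK
class FakeMessages:
    def create(self, **kwargs):
        return {"content": [{"type": "text", "text": "ok"}]}

events = []
messages = FakeMessages()
instrument_method(messages, "create", events.append)
response = messages.create(model="claude-3-haiku-20240307", max_tokens=10)
```

The same pattern applies to both `messages.create` and the legacy `completions.create`; streaming and async variants need additional handling because the wrapped call returns before the response body is complete.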

Testing

Tested the instrumentation with the following script, which covers all of the scenarios above:

"""Comprehensive examples of Anthropic API usage with AgentOps instrumentation.

This file demonstrates various ways to use Anthropic's Claude models with AgentOps
instrumentation, including:

1. Synchronous basic completions
2. Asynchronous completions
3. Streaming completions
4. Tool/function calling
5. Complex content with multiple content blocks
"""

import os
import json
import asyncio
from typing import Dict, Any
from dotenv import load_dotenv
import agentops
# Import Anthropic
from anthropic import Anthropic, AsyncAnthropic

# Load environment variables from .env file
load_dotenv()

# Initialize AgentOps with minimal tags
agentops.init(
    tags=["anthropic-comprehensive-examples"]
)

# Initialize Anthropic client
client = Anthropic()
async_client = AsyncAnthropic()

# Constants
MODEL = "claude-3-haiku-20240307"


def example_1_basic_completion():
    """Basic synchronous completion with simple text content."""
    print("\n===== Example 1: Basic Synchronous Completion =====")
    
    response = client.messages.create(
        model=MODEL,
        max_tokens=100,
        temperature=0.7,
        system="You are a helpful AI assistant answering questions concisely.",
        messages=[
            {"role": "user", "content": "What is the capital of France?"}
        ]
    )
    
    print(f"Response: {response.content[0].text}")
    print(f"Usage: {response.usage}")


async def example_2_async_completion():
    """Asynchronous completion with simple text content."""
    print("\n===== Example 2: Asynchronous Completion =====")
    
    response = await async_client.messages.create(
        model=MODEL,
        max_tokens=100,
        temperature=0.7,
        system="You are a helpful AI assistant answering questions concisely.",
        messages=[
            {"role": "user", "content": "What is the largest planet in our solar system?"}
        ]
    )
    
    print(f"Response: {response.content[0].text}")
    print(f"Usage: {response.usage}")


def example_3_streaming():
    """Streaming completion with progress updates."""
    print("\n===== Example 3: Streaming Completion =====")
    
    with client.messages.stream(
        model=MODEL,
        max_tokens=150,
        temperature=0.7,
        system="You are a helpful AI assistant providing step-by-step explanations.",
        messages=[
            {"role": "user", "content": "Explain how photosynthesis works in three steps."}
        ]
    ) as stream:
        print("Streaming response:")
        for text in stream.text_stream:
            print(text, end="", flush=True)
        
        print("\n")
        final_message = stream.get_final_message()
        print(f"Final message ID: {final_message.id}")
        print(f"Usage: {final_message.usage}")


async def example_4_async_streaming():
    """Asynchronous streaming completion."""
    print("\n===== Example 4: Asynchronous Streaming =====")
    
    async with async_client.messages.stream(
        model=MODEL,
        max_tokens=150,
        temperature=0.7,
        system="You are a helpful AI assistant summarizing information.",
        messages=[
            {"role": "user", "content": "Summarize the water cycle in three sentences."}
        ]
    ) as stream:
        print("Streaming response:")
        async for text in stream.text_stream:
            print(text, end="", flush=True)
        
        print("\n")
        final_message = await stream.get_final_message()
        print(f"Final message ID: {final_message.id}")
        print(f"Usage: {final_message.usage}")


def example_5_tools():
    """Example using tools/function calling with Claude."""
    print("\n===== Example 5: Tool/Function Calling =====")
    
    # Define some tools
    tools = [
        {
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The unit of temperature to use"
                    }
                },
                "required": ["location"]
            }
        },
        {
            "name": "get_population",
            "description": "Get the population of a city",
            "input_schema": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The name of the city"
                    },
                    "country": {
                        "type": "string",
                        "description": "The country of the city"
                    }
                },
                "required": ["city"]
            }
        }
    ]
    
    # Mock function to handle tool calls
    def handle_tool_call(tool_name: str, tool_input: Dict[str, Any]) -> str:
        """Mock function to handle tool calls and return string result."""
        if tool_name == "get_weather":
            location = tool_input.get("location", "unknown")
            unit = tool_input.get("unit", "celsius")
            result = {
                "temperature": 22 if unit == "celsius" else 72,
                "condition": "sunny",
                "humidity": "50%",
                "location": location,
                "unit": unit
            }
            return json.dumps(result)
        elif tool_name == "get_population":
            city = tool_input.get("city", "unknown")
            result = {
                "city": city,
                "population": "8.4 million" if city.lower() == "new york" else "unknown",
                "year": 2023
            }
            return json.dumps(result)
        return json.dumps({"error": "Unknown tool"})
    
    # Initial message
    response = client.messages.create(
        model=MODEL,
        max_tokens=200,
        temperature=0.7,
        system="You are a helpful assistant that uses tools when needed.",
        messages=[
            {"role": "user", "content": "What's the current weather in New York and how does it compare to the city's population?"}
        ],
        tools=tools
    )
    
    # Process any tool calls
    messages = [
        {"role": "user", "content": "What's the current weather in New York and how does it compare to the city's population?"}
    ]
    
    # Check if the model wants to use tools
    while any(block.type == "tool_use" for block in response.content if hasattr(block, 'type')):
        tool_use_blocks = [block for block in response.content if getattr(block, 'type', None) == "tool_use"]
        
        # Create a new message with tool_results
        messages.append({"role": "assistant", "content": response.content})
        
        # Process each tool call
        tool_results = []
        for tool_block in tool_use_blocks:
            # Handle the tool call - returns a JSON string
            result = handle_tool_call(tool_block.name, tool_block.input)
            
            # Add the result as a string
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": tool_block.id,
                "content": result  # String content instead of object
            })
        
        # Add the tool results to messages
        messages.append({"role": "user", "content": tool_results})
        
        # Get the next response
        response = client.messages.create(
            model=MODEL,
            max_tokens=200,
            temperature=0.7,
            system="You are a helpful assistant that uses tools when needed.",
            messages=messages,
            tools=tools
        )
    
    # Final answer
    print("Tool conversation complete. Final answer:")
    for block in response.content:
        if getattr(block, 'type', None) == "text":
            print(block.text)
    
    print(f"Usage: {response.usage}")


def example_6_complex_content():
    """Example with complex content including multiple content blocks."""
    print("\n===== Example 6: Complex Content Structure =====")
    
    # Define a message with complex content structure
    complex_messages = [
        {
            "role": "user", 
            "content": [
                {
                    "type": "text",
                    "text": "I'd like information about Paris and its weather."
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "text": "I'd be happy to help you with information about Paris. To provide accurate weather information, I'll need to check the current conditions."
                },
                {
                    "type": "tool_use",
                    "name": "get_weather",
                    "id": "tool_1",
                    "input": {
                        "location": "Paris, France",
                        "unit": "celsius"
                    }
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": "tool_1",
                    "content": json.dumps({
                        "temperature": 18,
                        "condition": "partly cloudy",
                        "humidity": "65%",
                        "location": "Paris, France",
                        "unit": "celsius"
                    })  # JSON string instead of dictionary
                }
            ]
        }
    ]
    
    # Get completion with the complex content
    response = client.messages.create(
        model=MODEL,
        max_tokens=300,
        temperature=0.7,
        system="You are a helpful travel assistant with knowledge about cities and their weather.",
        messages=complex_messages
    )
    
    print("Response to complex content:")
    for block in response.content:
        if getattr(block, 'type', None) == "text":
            print(block.text)
    
    print(f"Usage: {response.usage}")


def example_7_multi_turn_conversation():
    """Example of a multi-turn conversation with context tracking."""
    print("\n===== Example 7: Multi-turn Conversation =====")
    
    # Initial conversation setup
    conversation = [
        {"role": "user", "content": "Hi, I'm planning a trip to Japan next month."}
    ]
    
    # First response
    response = client.messages.create(
        model=MODEL,
        max_tokens=200,
        temperature=0.7,
        system="You are a helpful travel assistant providing concise advice.",
        messages=conversation
    )
    
    # Add assistant response to conversation
    conversation.append({"role": "assistant", "content": response.content[0].text})
    
    print("Assistant: " + response.content[0].text)
    
    # Second user message
    conversation.append({"role": "user", "content": "I'm interested in traditional Japanese cuisine. What dishes should I try?"})
    
    # Second response
    response = client.messages.create(
        model=MODEL,
        max_tokens=200,
        temperature=0.7,
        system="You are a helpful travel assistant providing concise advice.",
        messages=conversation
    )
    
    # Add assistant response to conversation
    conversation.append({"role": "assistant", "content": response.content[0].text})
    
    print("Assistant: " + response.content[0].text)
    
    # Third user message with context from previous exchanges
    conversation.append({"role": "user", "content": "Great suggestions! Which of these dishes can I find in Tokyo specifically?"})
    
    # Third response
    response = client.messages.create(
        model=MODEL,
        max_tokens=200,
        temperature=0.7,
        system="You are a helpful travel assistant providing concise advice.",
        messages=conversation
    )
    
    print("Assistant: " + response.content[0].text)
    print(f"Total tokens used in conversation: {response.usage.input_tokens + response.usage.output_tokens}")


async def run_async_examples():
    """Run all asynchronous examples."""
    await example_2_async_completion()
    await example_4_async_streaming()


def main():
    """Run all examples."""
    print("Starting Anthropic API Examples with AgentOps Instrumentation")
    
    # Run synchronous examples
    example_1_basic_completion()
    example_3_streaming()
    example_5_tools()
    example_6_complex_content()
    example_7_multi_turn_conversation()
    
    # Run asynchronous examples
    asyncio.run(run_async_examples())
    
    print("\nAll examples completed successfully!")


if __name__ == "__main__":
    main() 


codecov bot commented Apr 11, 2025

@tcdent (Contributor) commented Apr 11, 2025

Dude this is very well done! Just a couple questions and minor notes.

…e unused EventHandler class. Update original_handler type to Any for flexibility.
…instrumentation. Introduce helper functions to handle various content types and extract attributes for telemetry. Update `get_message_request_attributes` to streamline message handling and support complex content structures.
@Dwij1704 Dwij1704 requested a review from tcdent April 11, 2025 16:53
… to replace old attribute keys with new `MessageAttributes` constants for correct indexing.
@tcdent (Contributor) left a comment

LGTM

@bboynton97 (Contributor) left a comment

I'd really love to see unit testing on this because it seems pretty low lift. Other than that, GREAT job. This is very well extended, readable, and does the job! Nice work @Dwij1704!

@dot-agi (Member) commented Apr 11, 2025

Tested against the anthropic-example-asynchronous notebook; it needs changes.

@dot-agi (Member) left a comment

A common error when running the notebooks is this:

(DEBUG) 🖇 AgentOps: [agentops.instrumentation.anthropic] Error creating simplified prompts: type object 'MessageAttributes' has no attribute 'LLM_PROMPTS'
(DEBUG) 🖇 AgentOps: [agentops.instrumentation.anthropic] Unrecognized return type: <class 'anthropic.Stream'>
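The second debug line indicates the instrumentation doesn't recognize a streaming return value (`anthropic.Stream`). One generic way to handle a stream return, sketched here purely for illustration and not as the actual fix, is to wrap the iterator so chunks pass through to the caller while the wrapper accumulates them and reports the full text once the stream is exhausted:

```python
def wrap_stream(stream, on_complete):
    """Pass chunks through; report the accumulated text when exhausted (illustrative)."""
    chunks = []
    for chunk in stream:
        chunks.append(chunk)
        yield chunk
    on_complete("".join(chunks))

captured = []
pieces = list(wrap_stream(iter(["Hel", "lo"]), captured.append))
# pieces == ["Hel", "lo"], captured == ["Hello"]
```

The key property is that telemetry is only emitted after the consumer finishes iterating, which is why a plain "inspect the return value" wrapper logs "Unrecognized return type" for streams.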

@Dwij1704 Dwij1704 requested a review from dot-agi April 11, 2025 18:34
@Dwij1704 (Member, Author) commented

> I'd really love to see unit testing on this because it seems pretty low lift. Other than that, GREAT job. This is very well extended, readable, and does the job! Nice work @Dwij1704!

Added test
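A unit test for this kind of instrumentation can avoid real API calls entirely by mocking the client. The sketch below is illustrative only; the wrapping shape and event fields are hypothetical stand-ins, not the AgentOps test suite or API.

```python
from unittest.mock import MagicMock

def test_messages_create_records_event():
    # Hypothetical instrumentation shape: wrap messages.create and record
    # one telemetry event per call. Names are illustrative, not AgentOps API.
    events = []
    client = MagicMock()
    client.messages.create.return_value = MagicMock(
        usage=MagicMock(input_tokens=3, output_tokens=5)
    )

    original = client.messages.create

    def wrapped(**kwargs):
        response = original(**kwargs)
        events.append({
            "model": kwargs.get("model"),
            "input_tokens": response.usage.input_tokens,
            "output_tokens": response.usage.output_tokens,
        })
        return response

    client.messages.create = wrapped

    client.messages.create(model="claude-3-haiku-20240307", messages=[])
    assert events == [
        {"model": "claude-3-haiku-20240307", "input_tokens": 3, "output_tokens": 5}
    ]

test_messages_create_records_event()
```

Because the mock stands in for the Anthropic client, the test exercises only the wrapper logic and runs without network access or an API key.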

@Dwij1704 Dwij1704 requested a review from bboynton97 April 11, 2025 21:21
@dot-agi (Member) left a comment

Everything ok.

Good job @Dwij1704 🚀

@bboynton97 (Contributor) commented

dwij you the guy omg

@bboynton97 bboynton97 merged commit d4be724 into main Apr 15, 2025
7 of 9 checks passed
@bboynton97 bboynton97 deleted the fix-anthropic-streaming branch April 15, 2025 22:02